In probability theory and statistics, the definition of variance is either the expected value (when considering a theoretical distribution) or the average value (for actual experimental data) of squared deviations from the mean. Computations for analysis of variance involve the partitioning of a sum of squared deviations. An understanding of the computations involved is greatly enhanced by a study of the statistical value

: <math>\operatorname{E}( X^2 ).</math>

It is well known that for a random variable <math>X</math> with mean <math>\mu</math> and variance <math>\sigma^2</math>

: <math>\sigma^2 = \operatorname{E}( X^2 ) - \mu^2.</math>〔Mood & Graybill: ''An Introduction to the Theory of Statistics'' (McGraw-Hill)〕

Therefore

: <math>\operatorname{E}( X^2 ) = \sigma^2 + \mu^2.</math>

From the above, the following are easily derived for <math>n</math> independent observations with common mean <math>\mu</math> and variance <math>\sigma^2</math>:

: <math>\operatorname{E}\left( \sum X^2 \right) = n\sigma^2 + n\mu^2</math>
: <math>\operatorname{E}\left( \left( \sum X \right)^2 \right) = n\sigma^2 + n^2\mu^2</math>

If <math>\hat{Y}</math> is a vector of ''n'' predictions, and <math>Y</math> is the vector of the true values, then the SSE of the predictor is:

: <math>\operatorname{SSE} = \sum_{i=1}^n \left( \hat{Y}_i - Y_i \right)^2.</math>

== Sample variance ==

The sum of squared deviations needed to calculate sample variance (before deciding whether to divide by ''n'' or ''n'' − 1) is most easily calculated as

: <math>S = \sum x^2 - \frac{\left( \sum x \right)^2}{n}.</math>

From the two derived expectations above, the expected value of this sum is

: <math>\operatorname{E}(S) = n\sigma^2 + n\mu^2 - \frac{n\sigma^2 + n^2\mu^2}{n},</math>

which implies

: <math>\operatorname{E}(S) = (n - 1)\sigma^2.</math>

This effectively proves the use of the divisor ''n'' − 1 in the calculation of an unbiased sample estimate of ''σ''<sup>2</sup>.
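The two results above can be checked numerically. The sketch below (a minimal illustration using only the Python standard library; the function names are illustrative, not from any particular package) verifies that the computational shortcut <math>S = \sum x^2 - (\sum x)^2 / n</math> agrees with the direct sum of squared deviations from the sample mean, and then uses a Monte Carlo average over many samples to check that <math>\operatorname{E}(S) \approx (n-1)\sigma^2</math>, which is why dividing <math>S</math> by ''n'' − 1 gives an unbiased variance estimate.

```python
import random

def sum_squared_deviations(xs):
    """S via the computational shortcut: sum(x^2) - (sum(x))^2 / n."""
    n = len(xs)
    return sum(x * x for x in xs) - sum(xs) ** 2 / n

def unbiased_variance(xs):
    """Unbiased sample variance: divide S by n - 1."""
    return sum_squared_deviations(xs) / (len(xs) - 1)

# The shortcut agrees with the direct definition sum((x - mean)^2).
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = sum(xs) / len(xs)
direct = sum((x - mean) ** 2 for x in xs)
assert abs(sum_squared_deviations(xs) - direct) < 1e-9

# Monte Carlo check of E(S) = (n - 1) * sigma^2: average S over many
# samples of size n drawn from a normal distribution N(mu, sigma^2).
random.seed(0)
mu, sigma, n, trials = 10.0, 3.0, 5, 20000
avg_S = sum(
    sum_squared_deviations([random.gauss(mu, sigma) for _ in range(n)])
    for _ in range(trials)
) / trials
# avg_S should be close to (n - 1) * sigma^2 = 4 * 9 = 36, up to
# Monte Carlo sampling error.
```

Note that dividing by ''n'' instead of ''n'' − 1 in `unbiased_variance` would give an estimator whose expectation is <math>\tfrac{n-1}{n}\sigma^2</math>, i.e. one that systematically underestimates the true variance.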